Search results for "hyperparameter optimization"
Showing 5 of 5 documents
Adjusted bat algorithm for tuning of support vector machine parameters
2016
Support vector machines are a powerful and widely used supervised learning technique for classification. The quality of the constructed classifier can be improved by appropriate selection of the learning parameters. These parameters are often tuned using grid search with a relatively large step. This optimization can be done more efficiently and more precisely using stochastic search metaheuristics. In this paper we propose an adjusted bat algorithm for support vector machine parameter optimization and show that, compared to grid search, it leads to a better classifier. We tested our approach on a standard set of benchmark data sets from the UCI machine learning repositor…
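The snippet contrasts metaheuristic search with the usual coarse grid search over SVM learning parameters. Below is a minimal sketch of that grid-search baseline, assuming scikit-learn, an RBF-kernel SVC with C and gamma as the tuned parameters, and illustrative parameter ranges; the adjusted bat algorithm itself is not reproduced.

```python
# Grid-search baseline for SVM hyperparameter tuning (the approach the
# adjusted bat algorithm is compared against). Dataset and parameter
# ranges are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in for a UCI benchmark
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Coarse grid with a relatively large step, as described in the abstract.
param_grid = {
    "C": np.logspace(-2, 3, 6),        # regularization strength
    "gamma": np.logspace(-4, 1, 6),    # RBF kernel width
}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```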
An LP-based hyperparameter optimization model for language modeling
2018
In order to find hyperparameters for a machine learning model, algorithms such as grid search or random search are used over the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find per…
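The abstract frames perplexity as the cost function minimized over hyperparameter values. The paper's fractional/linear programming model is not reproduced here; the sketch below only illustrates that baseline setting, assuming a toy interpolated bigram/unigram model whose interpolation weight is a hypothetical hyperparameter swept over a grid.

```python
# Perplexity as the hyperparameter-selection cost function. The interpolation
# weight lam, the toy corpus, and the smoothing choice are all assumptions
# made for illustration only.
import math
from collections import Counter

train = "the cat sat on the mat the dog sat on the rug".split()
held_out = "the cat sat on the rug".split()

unigram = Counter(train)
bigram = Counter(zip(train[:-1], train[1:]))
V, N = len(unigram), len(train)

def prob(w, prev, lam):
    # Interpolated bigram/unigram with add-one smoothing on the unigram part.
    p_uni = (unigram[w] + 1) / (N + V)
    p_bi = bigram[(prev, w)] / unigram[prev] if unigram[prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

def perplexity(lam):
    # exp of the average negative log-probability on held-out bigrams.
    log_p = sum(math.log(prob(w, prev, lam))
                for prev, w in zip(held_out[:-1], held_out[1:]))
    return math.exp(-log_p / (len(held_out) - 1))

best = min((perplexity(lam), lam) for lam in [i / 10 for i in range(10)])
print("best perplexity %.3f at lambda=%.1f" % best)
```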
A heuristic, iterative algorithm for change-point detection in abrupt change models
2017
Change-point detection in abrupt change models is a very challenging research topic in many fields of both methodological and applied statistics. Due to strong irregularities, discontinuity and non-smoothness, likelihood-based procedures are awkward; for instance, usual optimization methods do not work, and grid search algorithms are the most commonly used approach for estimation. In this paper a heuristic, iterative algorithm for approximate maximum likelihood estimation is introduced for change-point detection in piecewise constant regression models. The algorithm is based on iterative fitting of simple linear models, and appears to extend easily to more general frameworks, such as models i…
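For a single break in a piecewise constant (mean-shift) model, the grid-search estimation the abstract refers to amounts to scanning every candidate change point and keeping the residual-sum-of-squares minimizer, which coincides with the Gaussian maximum likelihood estimate. A minimal sketch on simulated data (the paper's iterative algorithm is not reproduced):

```python
# Grid-search baseline for a single change point in a piecewise constant
# (mean-shift) model: evaluate the residual sum of squares at every
# candidate break and keep the minimizer. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
n = len(y)

def rss(tau):
    # Fit a constant mean to each segment split at tau, sum squared residuals.
    left, right = y[:tau], y[tau:]
    return ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()

candidates = range(2, n - 1)               # keep at least two points per segment
tau_hat = min(candidates, key=rss)         # RSS minimizer = Gaussian MLE of the break
print("estimated change point:", tau_hat)  # true break is at 60
```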
Online Hyperparameter Search Interleaved with Proximal Parameter Updates
2021
There is a clear need for efficient hyperparameter optimization (HO) algorithms for statistical learning, since commonly applied search methods (such as grid search with N-fold cross-validation) are inefficient and/or approximate. Previously existing gradient-based HO algorithms that rely on the smoothness of the cost function cannot be applied in problems such as Lasso regression. In this contribution, we develop an HO method that relies on the structure of proximal gradient methods and does not require a smooth cost function. The method is applied to leave-one-out (LOO)-validated Lasso and Group Lasso, and an online variant is proposed. Numerical experiments corroborate the convergence …
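The method builds on the structure of proximal gradient methods for non-smooth problems such as the Lasso. Below is a minimal sketch of that inner solver (ISTA with the soft-thresholding proximal operator) on simulated data; the interleaved online hyperparameter updates are not reproduced, and the regularization level and step size are assumptions.

```python
# Proximal gradient (ISTA) for the Lasso, the inner solver whose structure
# the proposed HO method exploits. lam and the step size are assumed.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n      # gradient of (1/2n) ||y - X beta||^2
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(50)
print(lasso_ista(X, y, lam=0.1).round(2))
```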
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
2022
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…
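As the abstract states, hyperparameter selection is cast as a bilevel problem. In generic notation (the symbols below are assumptions, not the paper's), an outer criterion C, e.g. a validation loss, is minimized over the hyperparameters while the inner training objective L defines the fitted parameters; the hypergradient then involves the Jacobian of the inner solution with respect to the hyperparameters, the quantity whose approximation the abstract discusses.

```latex
% Bilevel formulation; C is an outer criterion (e.g. a validation loss),
% L the inner training objective. The notation is generic, not the paper's.
\min_{\lambda} \; C\bigl(\hat\beta(\lambda)\bigr)
\quad \text{subject to} \quad
\hat\beta(\lambda) \in \operatorname*{arg\,min}_{\beta} \; L(\beta, \lambda)

% Hypergradient via the chain rule, with \hat{J} the Jacobian of the inner
% solution with respect to the hyperparameters:
\nabla_{\lambda} C\bigl(\hat\beta(\lambda)\bigr)
  = \hat{J}(\lambda)^{\top} \, \nabla C\bigl(\hat\beta(\lambda)\bigr),
\qquad
\hat{J}(\lambda) = \frac{\partial \hat\beta(\lambda)}{\partial \lambda}
```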